Multi-Instance Dimensionality Reduction via Sparsity and Orthogonality
Authors
Abstract
Similar resources
Multi-Instance Dimensionality Reduction
Multi-instance learning deals with problems that treat bags of instances as training examples. In single-instance learning problems, dimensionality reduction is an essential step for high-dimensional data analysis and has been studied for years. The curse of dimensionality also exists in multi-instance learning tasks, yet this difficult task has not been studied before. Direct application of exi...
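The snippet above defines the setting: a training example is a bag of instances with a single bag-level label. A minimal sketch (the variable names and the mean-pooling baseline are illustrative, not the cited paper's method) of what such data looks like:

```python
import numpy as np

# In multi-instance learning, each training example is a *bag* of feature
# vectors sharing one bag-level label (illustrative synthetic data).
rng = np.random.default_rng(0)
bags = [rng.normal(size=(rng.integers(3, 8), 5)) for _ in range(4)]  # 4 bags of 5-dim instances
labels = [1, 0, 1, 0]                                                # one label per bag

# A common baseline collapses each bag to a fixed-length vector, e.g. its mean,
# after which single-instance dimensionality reduction could be applied.
bag_features = np.stack([b.mean(axis=0) for b in bags])              # shape (4, 5)
```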
Multi-Sensor Fusion via Reduction of Dimensionality
Large high-dimensional datasets are becoming more and more popular in an increasing number of research areas. Processing the high dimensional data incurs a high computational cost and is inherently inefficient since many of the values that describe a data object are redundant due to noise and inner correlations. Consequently, the dimensionality, i.e. the number of values that are used to descri...
Orthogonality and Dimensionality
In this article, we present what we believe to be a simple way to motivate the use of Hilbert spaces in quantum mechanics. To achieve this, we study the way the notion of dimension can, at a very primitive level, be defined as the cardinality of a maximal collection of mutually orthogonal elements (which, for instance, can be seen as spatial directions). Following this idea, we develop a formal...
Dimensionality reduction via discretization
The existence of numeric data and large amounts of records in a database pose a challenging task to explicit concepts extraction from the raw data. This paper introduces a method that reduces data vertically and horizontally, keeps the discriminating power of the original data, and paves the way for extracting concepts. The method is based on discretization (vertical reduction) and feature sele...
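The snippet describes two reductions: vertical (discretizing numeric columns) and horizontal (merging records that become identical after discretization). A generic numpy sketch of that idea, not the specific algorithm from the cited paper:

```python
import numpy as np

def discretize(X, n_bins=4):
    """Vertical reduction: replace each numeric column with
    equal-frequency bin labels 0..n_bins-1."""
    out = np.empty(X.shape, dtype=int)
    for j in range(X.shape[1]):
        # interior quantile cut points, e.g. 25/50/75% for 4 bins
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        out[:, j] = np.digitize(X[:, j], edges)
    return out

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
X_binned = discretize(X)
# Horizontal reduction: after binning, many records collapse to duplicates
# (at most 4**3 = 64 distinct rows remain for 500 samples).
X_reduced = np.unique(X_binned, axis=0)
```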
Multi-View Dimensionality Reduction via Canonical Correlation Analysis
We analyze the multi-view regression problem where we have two views X = (X^(1), X^(2)) of the input data and a target variable Y of interest. We provide sufficient conditions under which we can reduce the dimensionality of X (via a projection) without losing predictive power for Y. Crucially, this projection can be computed via a Canonical Correlation Analysis only on the unlabeled data. The algorith...
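The key point in this snippet is that the projection comes from CCA on unlabeled two-view data alone. A minimal numpy sketch of classical CCA via a whitened SVD (a generic construction under my own notation, not the cited paper's implementation):

```python
import numpy as np

def cca_projection(X, Z, k, eps=1e-6):
    """Return a d_x-by-k projection for view X whose components are
    maximally correlated with view Z (classical CCA, whitened-SVD form)."""
    X = X - X.mean(axis=0)
    Z = Z - Z.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])  # regularized covariances
    Czz = Z.T @ Z / n + eps * np.eye(Z.shape[1])
    Cxz = X.T @ Z / n
    Lx = np.linalg.cholesky(Cxx)                  # Cxx = Lx @ Lx.T
    Lz = np.linalg.cholesky(Czz)
    # SVD of the whitened cross-covariance gives the canonical directions
    M = np.linalg.inv(Lx) @ Cxz @ np.linalg.inv(Lz).T
    U, _, _ = np.linalg.svd(M)
    return np.linalg.inv(Lx).T @ U[:, :k]

# Two synthetic views sharing a 2-dimensional latent signal; note that
# no target variable Y is used to compute the projection.
rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 2))
X = shared @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
Z = shared @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(200, 8))
Px = cca_projection(X, Z, k=2)
X_low = (X - X.mean(axis=0)) @ Px     # reduced representation of view one
```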
Journal
Title: Neural Computation
Year: 2018
ISSN: 0899-7667,1530-888X
DOI: 10.1162/neco_a_01140